But often the linearity assumption is good enough.
When it’s not …
polynomials
step functions
splines
local regression, and
generalized additive models
offer a lot of flexibility, without losing the ease and interpretability of linear models.
1 Main idea
To augment/replace the vector of inputs \(X\) with additional variables, which are transformations of \(X\), and then use linear models in this new space of derived input features.
For example, in regression problems, \(f(X) = \mbox{E}(Y\mid X)\) is usually modeled as a linear function of \(X\); here we instead model it as a linear function of transformations of \(X\), the basis functions \(h_m(X)\):
\[
f(X) =\sum_{m=1}^M \beta_m h_m(X)
\]
The beauty of this approach is that once the basis functions \(h_m\) have been determined, the models are linear in these new variables, and the fitting proceeds as for linear models.
1.1 Popular choices for basis functions \(h_m(X)\)
\(h_m(X) = X_m, m = 1,\ldots,p\) recovers the original linear model
\(h_m(X) = X_j^2\) or \(h_m(X) = X_jX_k\) allows us to augment the inputs with polynomial terms. Note, however, that the number of variables grows exponentially in the degree of the polynomial. A full quadratic model in \(p\) variables requires \(O(p^2)\) square and cross-product terms, or more generally \(O(p^d)\) for a degree-d polynomial.
\(h_m(X) = \mathbf{I}(L_m \leq X_k < U_m)\), an indicator for a region of \(X_k\). Breaking the range of \(X_k\) into \(M_k\) such nonoverlapping regions results in a model with a piecewise constant contribution for \(X_k\) (see the short R sketch below).
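As a quick illustration (using the Wage data loaded in the next chunk), each of these basis choices corresponds to an ordinary lm() formula; the cutpoints below are illustrative choices only.
# Polynomial and interaction terms as basis functions
lm(wage ~ age + I(age^2), data = Wage)
lm(wage ~ age * year, data = Wage)  # includes the cross-product term age:year
# Indicator (piecewise constant) basis functions for age via cut()
lm(wage ~ cut(age, breaks = c(17, 35, 50, 65, 81)), data = Wage)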
library(gtsummary)
library(ISLR2)
library(tidyverse)
# Convert to tibble
Wage <- as_tibble(Wage) %>% print(width = Inf)
# A tibble: 3,000 × 11
year age maritl race education region
<int> <int> <fct> <fct> <fct> <fct>
1 2006 18 1. Never Married 1. White 1. < HS Grad 2. Middle Atlantic
2 2004 24 1. Never Married 1. White 4. College Grad 2. Middle Atlantic
3 2003 45 2. Married 1. White 3. Some College 2. Middle Atlantic
4 2003 43 2. Married 3. Asian 4. College Grad 2. Middle Atlantic
5 2005 50 4. Divorced 1. White 2. HS Grad 2. Middle Atlantic
6 2008 54 2. Married 1. White 4. College Grad 2. Middle Atlantic
7 2009 44 2. Married 4. Other 3. Some College 2. Middle Atlantic
8 2008 30 1. Never Married 3. Asian 3. Some College 2. Middle Atlantic
9 2006 41 1. Never Married 2. Black 3. Some College 2. Middle Atlantic
10 2004 52 2. Married 1. White 2. HS Grad 2. Middle Atlantic
jobclass health health_ins logwage wage
<fct> <fct> <fct> <dbl> <dbl>
1 1. Industrial 1. <=Good 2. No 4.32 75.0
2 2. Information 2. >=Very Good 2. No 4.26 70.5
3 1. Industrial 1. <=Good 1. Yes 4.88 131.
4 2. Information 2. >=Very Good 1. Yes 5.04 155.
5 2. Information 1. <=Good 1. Yes 4.32 75.0
6 2. Information 2. >=Very Good 1. Yes 4.85 127.
7 1. Industrial 2. >=Very Good 1. Yes 5.13 170.
8 2. Information 1. <=Good 1. Yes 4.72 112.
9 2. Information 2. >=Very Good 1. Yes 4.78 119.
10 2. Information 2. >=Very Good 1. Yes 4.86 129.
# ℹ 2,990 more rows
# Summary statistics
Wage %>% tbl_summary()
Characteristic               N = 3,000¹
year
  2003                       513 (17%)
  2004                       485 (16%)
  2005                       447 (15%)
  2006                       392 (13%)
  2007                       386 (13%)
  2008                       388 (13%)
  2009                       389 (13%)
age                          42 (34, 51)
maritl
  1. Never Married           648 (22%)
  2. Married                 2,074 (69%)
  3. Widowed                 19 (0.6%)
  4. Divorced                204 (6.8%)
  5. Separated               55 (1.8%)
race
  1. White                   2,480 (83%)
  2. Black                   293 (9.8%)
  3. Asian                   190 (6.3%)
  4. Other                   37 (1.2%)
education
  1. < HS Grad               268 (8.9%)
  2. HS Grad                 971 (32%)
  3. Some College            650 (22%)
  4. College Grad            685 (23%)
  5. Advanced Degree         426 (14%)
region
  1. New England             0 (0%)
  2. Middle Atlantic         3,000 (100%)
  3. East North Central      0 (0%)
  4. West North Central      0 (0%)
  5. South Atlantic          0 (0%)
  6. East South Central      0 (0%)
  7. West South Central      0 (0%)
  8. Mountain                0 (0%)
  9. Pacific                 0 (0%)
jobclass
  1. Industrial              1,544 (51%)
  2. Information             1,456 (49%)
health
  1. <=Good                  858 (29%)
  2. >=Very Good             2,142 (71%)
health_ins
  1. Yes                     2,083 (69%)
  2. No                      917 (31%)
logwage                      4.65 (4.45, 4.86)
wage                         105 (85, 129)
¹ n (%); Median (Q1, Q3)
# Plot wage ~ age; geom_smooth uses a GAM fit when n > 1000
Wage %>%
  ggplot(mapping = aes(x = age, y = wage)) +
  geom_point() +
  geom_smooth() +
  labs(
    title = "Wage changes nonlinearly with age",
    x = "Age",
    y = "Wage (k$)"
  )
# Load the pandas library
import pandas as pd
# Load numpy for array manipulation
import numpy as np
# Load seaborn plotting library
import seaborn as sns
import matplotlib.pyplot as plt
# Set font size in plots
sns.set(font_scale = 1.2)
# Display all columns
pd.set_option('display.max_columns', None)
# Import Wage data
Wage = pd.read_csv("../data/Wage.csv")
Wage.info()
# Visualize wage ~ age, display lowess curve
plt.figure()
sns.lmplot(
    data = Wage,
    x = "age",
    y = "wage",
    lowess = True,
    scatter_kws = {'alpha': 0.1},
    height = 8
).set(
    title = "Wage changes nonlinearly with age",
    xlabel = 'Age',
    ylabel = 'Wage (k$)'
);
plt.show()
2 Polynomial regression
For most of this lecture, we consider \(p=1\) in the examples. \[
y_i = \beta_0 + \beta_1 x_i + \beta_2 x_i^2 + \cdots + \beta_d x_i^d + \epsilon_i.
\]
# Visualize wage ~ age, display order-4 polynomial fit
plt.figure()
sns.lmplot(
    data = Wage,
    x = "age",
    y = "wage",
    # Order-4 polynomial regression
    order = 4,
    scatter_kws = {'alpha': 0.1},
    height = 8
).set(
    xlabel = 'Age',
    ylabel = 'Wage (k$)',
    title = 'Degree-4 Polynomial'
);
plt.show()
Create new variables \(X_1 = X\), \(X_2 = X^2\), …, and then treat as multiple linear regression.
Not really interested in the coefficients; more interested in the fitted function values at any value \(x_0\): \[
\hat f(x_0) = \hat{\beta}_0 + \hat{\beta}_1 x_0 + \hat{\beta}_2 x_0^2 + \hat{\beta}_3 x_0^3 + \hat{\beta}_4 x_0^4.
\]
# poly(age, 4) constructs orthogonal polynomials of degree 1 to degree 4, all orthogonal to the constant
lmod <- lm(wage ~ poly(age, degree = 4), data = Wage)
summary(lmod)
# poly(age, 4, raw = TRUE) produces raw (non-orthogonal) polynomials, which match the Python results below
lmod <- lm(wage ~ poly(age, degree = 4, raw = TRUE), data = Wage)
summary(lmod)
Call:
lm(formula = wage ~ poly(age, degree = 4, raw = TRUE), data = Wage)
Residuals:
Min 1Q Median 3Q Max
-98.707 -24.626 -4.993 15.217 203.693
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.842e+02 6.004e+01 -3.067 0.002180 **
poly(age, degree = 4, raw = TRUE)1 2.125e+01 5.887e+00 3.609 0.000312 ***
poly(age, degree = 4, raw = TRUE)2 -5.639e-01 2.061e-01 -2.736 0.006261 **
poly(age, degree = 4, raw = TRUE)3 6.811e-03 3.066e-03 2.221 0.026398 *
poly(age, degree = 4, raw = TRUE)4 -3.204e-05 1.641e-05 -1.952 0.051039 .
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 39.91 on 2995 degrees of freedom
Multiple R-squared: 0.08626, Adjusted R-squared: 0.08504
F-statistic: 70.69 on 4 and 2995 DF, p-value: < 2.2e-16
Since \(\hat f(x_0)\) is a linear function of the \(\hat{\beta}_j\), we can get a simple expression for the pointwise variance \(\operatorname{Var}[\hat f(x_0)]\) at any value \(x_0\) (see the sketch below).
We either fix the degree \(d\) at some reasonably low value, or use cross-validation to choose \(d\).
Can do separately on several variables. Just stack the variables into one matrix, and separate out the pieces afterwards (see GAMs later).
Polynomial modeling can be done for generalized linear models (logistic regression, Poisson regression, etc) as well.
Caveat: polynomials have notorious tail behavior. Very bad for extrapolation.
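A small sketch of the pointwise variances and of the tail-behavior caveat, assuming the raw-polynomial fit lmod from the chunk above; the out-of-range ages are purely illustrative:
# Pointwise fit and standard errors over a grid of observed ages
age_grid <- seq(from = min(Wage$age), to = max(Wage$age), length.out = 100)
pred <- predict(lmod, newdata = data.frame(age = age_grid), se.fit = TRUE)
# Approximate 95% pointwise confidence band for the fitted function
band <- cbind(
  lower = pred$fit - 2 * pred$se.fit,
  upper = pred$fit + 2 * pred$se.fit
)
# Tail behavior: predictions far outside the observed ages (about 18-80)
# are dominated by the age^4 term and quickly become implausible
predict(lmod, newdata = data.frame(age = c(90, 100, 110)))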
# Plot: overlay the fitted polynomial curve
# (pipe and X come from an earlier, omitted cell that fit the degree-4 polynomial with scikit-learn)
plt.figure()
ax = sns.scatterplot(
    data = Wage,
    x = 'age',
    y = 'wage',
    alpha = 0.1
);
sns.lineplot(
    x = Wage['age'],
    y = pipe.predict(X),
    ax = ax
).set(
    title = "Polynomial regression (order = 4)",
    xlabel = 'Age',
    ylabel = 'Wage (k$)'
);
plt.show()
import statsmodels.api as sm
import statsmodels.formula.api as smf
# Fit linear regression
lmod = smf.ols(
    formula = 'wage ~ np.vander(age, 5, increasing = True) - 1',
    data = Wage
).fit()
lmod.summary()
OLS Regression Results
Dep. Variable:      wage              R-squared:           0.086
Model:              OLS               Adj. R-squared:      0.085
Method:             Least Squares     F-statistic:         70.69
Date:               Tue, 25 Feb 2025  Prob (F-statistic):  2.77e-57
Time:               17:47:50          Log-Likelihood:      -15315.
No. Observations:   3000              AIC:                 3.064e+04
Df Residuals:       2995              BIC:                 3.067e+04
Df Model:           4
Covariance Type:    nonrobust

                                          coef    std err        t    P>|t|     [0.025     0.975]
np.vander(age, 5, increasing=True)[0] -184.1542     60.040   -3.067    0.002   -301.879    -66.430
np.vander(age, 5, increasing=True)[1]   21.2455      5.887    3.609    0.000      9.703     32.788
np.vander(age, 5, increasing=True)[2]   -0.5639      0.206   -2.736    0.006     -0.968     -0.160
np.vander(age, 5, increasing=True)[3]    0.0068      0.003    2.221    0.026      0.001      0.013
np.vander(age, 5, increasing=True)[4] -3.204e-05   1.64e-05   -1.952    0.051  -6.42e-05   1.45e-07

Omnibus:          1097.594   Durbin-Watson:     1.960
Prob(Omnibus):       0.000   Jarque-Bera (JB):  4965.521
Skew:                1.722   Prob(JB):          0.00
Kurtosis:            8.279   Cond. No.          5.67e+08

Notes: [1] Standard Errors assume that the covariance matrix of the errors is correctly specified. [2] The condition number is large, 5.67e+08. This might indicate that there are strong multicollinearity or other numerical problems.
3 Regression splines
Instead of a single polynomial in \(X\) over its whole domain, we can rather use different polynomials in regions defined by knots. E.g., a piecewise cubic polynomial with a single knot at \(c\) takes the form \[
y_i = \begin{cases}
\beta_{01} + \beta_{11} x_i + \beta_{21} x_i^2 + \beta_{31} x_i^3 + \epsilon_i & \text{if } x_i < c \\
\beta_{02} + \beta_{12} x_i + \beta_{22} x_i^2 + \beta_{32} x_i^3 + \epsilon_i & \text{if } x_i \ge c
\end{cases}.
\]
Better to add constraints to the polynomials, e.g., continuity.
Splines have the “maximum” amount of continuity.
3.1 Linear spline
A linear spline with knots at \(\xi_k\), \(k = 1,\ldots,K\), is a piecewise linear polynomial continuous at each knot.
We can represent this model as \[
y_i = \beta_0 + \beta_1 b_1(x_i) + \beta_2 b_2(x_i) + \cdots + \beta_{K+1} b_{K+1}(x_i) + \epsilon_i,
\] where \(b_k\) are basis functions: \[\begin{eqnarray*}
b_1(x_i) &=& x_i \\
b_{k+1}(x_i) &=& (x_i - \xi_k)_+, \quad k=1,\ldots,K.
\end{eqnarray*}\] Here \((\cdot)_+\) means positive part \[
(x_i - \xi_k)_+ = \begin{cases}
x_i - \xi_k & \text{if } x_i > \xi_k \\
0 & \text{otherwise}
\end{cases}.
\]
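A minimal R sketch of this linear-spline basis, built by hand with pmax() and fit by ordinary least squares; the knots 25, 40, 60 are just illustrative choices:
# Truncated-power basis for a linear spline in age with knots at 25, 40, 60
knots <- c(25, 40, 60)
spl_df <- data.frame(
  wage = Wage$wage,
  b1 = Wage$age,
  b2 = pmax(Wage$age - knots[1], 0),
  b3 = pmax(Wage$age - knots[2], 0),
  b4 = pmax(Wage$age - knots[3], 0)
)
# Piecewise linear in age, continuous at each knot
fit_ls <- lm(wage ~ b1 + b2 + b3 + b4, data = spl_df)
summary(fit_ls)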
3.2 Cubic splines
A cubic spline with knots at \(\xi_k\), \(k = 1,\ldots,K\), is a piecewise cubic polynomial with continuous derivatives up to order 2 at each knot.
Again we can represent this model with truncated power basis functions: \[
y_i = \beta_0 + \beta_1 b_1(x_i) + \beta_2 b_2(x_i) + \cdots + \beta_{K+3} b_{K+3}(x_i) + \epsilon_i,
\] with \[\begin{eqnarray*}
b_1(x_i) &=& x_i \\
b_2(x_i) &=& x_i^2 \\
b_3(x_i) &=& x_i^3 \\
b_{k+3}(x_i) &=& (x_i - \xi_k)_+^3, \quad k = 1,\ldots,K,
\end{eqnarray*}\] where \[
(x_i - \xi_k)_+^3 = \begin{cases}
(x_i - \xi_k)^3 & \text{if } x_i > \xi_k \\
0 & \text{otherwise}
\end{cases}.
\]
A cubic spline with \(K\) knots costs \(K+4\) parameters or degrees of freedom. That is \(4(K+1)\) polynomial coefficients minus \(3K\) constraints.
While the truncated power basis is conceptually simple, it is not too attractive numerically: powers of large numbers can lead to severe rounding problems. In practice, B-spline basis functions are preferred for their computational efficiency. See ESL Chapter 5 Appendix.
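As a sanity check on the truncated-power representation (a sketch, with illustrative knots at 25, 40, 60): the hand-built basis and the B-spline basis from splines::bs() span the same function space, so the two fits give identical fitted values, even though bs() is numerically preferable.
library(splines)
# Hand-built truncated-power basis for a cubic spline with knots at 25, 40, 60
tp_fit <- lm(
  wage ~ age + I(age^2) + I(age^3) +
    I(pmax(age - 25, 0)^3) + I(pmax(age - 40, 0)^3) + I(pmax(age - 60, 0)^3),
  data = Wage
)
# Same spline space via the numerically stable B-spline basis
bs_fit <- lm(wage ~ bs(age, knots = c(25, 40, 60)), data = Wage)
# Maximum difference between the two sets of fitted values (should be ~0)
max(abs(fitted(tp_fit) - fitted(bs_fit)))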
from sklearn.preprocessing import SplineTransformer

# Cubic spline basis for age
X_age = np.array(X['age']).reshape(3000, 1)
x_plot = np.linspace(start = 15, stop = 85, num = 70)
X_plot = x_plot[:, np.newaxis]
bs_plot = SplineTransformer(
    degree = 3,
    # knots = np.array([25, 40, 60]).reshape(3, 1),
    n_knots = 5,
    extrapolation = 'continue',
    # include_bias = False
).fit(X_age).transform(X_plot)
ns_plot = SplineTransformer(
    degree = 3,
    # knots = np.array([25, 40, 60]).reshape(3, 1),
    n_knots = 5,
    extrapolation = 'linear',
    # include_bias = False
).fit(X_age).transform(X_plot)

# Plot
fig, axes = plt.subplots(ncols = 2, figsize = (20, 6))
axes[0].plot(x_plot, bs_plot)
# axes[0].legend(axes[0].lines, [f"spline {n}" for n in range(4)])
axes[0].set_title("B-splines")
axes[1].plot(x_plot, ns_plot)
# axes[1].legend(axes[0].lines, [f"spline {n}" for n in range(8)])
axes[1].set_title("B-splines with linearity at boundary")
plt.show()
3.3 Natural cubic splines
Splines can have high variance at the outer range of the predictors.
A natural cubic spline extrapolates linearly beyond the boundary knots. This adds \(4 = 2 \times 2\) extra constraints, and allows us to put more internal knots for the same degrees of freedom as a regular cubic spline.
A natural spline with \(K\) knots has \(K\) degrees of freedom.
library(splines)
# Plot wage ~ age
Wage %>%
  ggplot(mapping = aes(x = age, y = wage)) +
  geom_point(alpha = 0.25) +
  # Cubic spline
  geom_smooth(
    method = "lm",
    formula = y ~ bs(x, knots = c(25, 40, 60)),
    color = "blue"
  ) +
  # Natural cubic spline
  geom_smooth(
    method = "lm",
    formula = y ~ ns(x, knots = c(25, 40, 60)),
    color = "red"
  ) +
  labs(
    title = "Natural cubic spline fit (red) vs cubic spline fit (blue)",
    x = "Age",
    y = "Wage (k$)"
  )
3.4 Knot placement
One strategy is to decide \(K\), the number of knots, and then place them at appropriate quantiles of the observed \(X\).
In practice, users often specify the degrees of freedom and let the software choose the number of knots and their locations.
4 Smoothing splines
Consider this criterion for fitting a smooth function \(g(x)\) to some data: \[
\text{minimize} \quad \sum_{i=1}^n (y_i - g(x_i))^2 + \lambda \int g''(t)^2 \, dt.
\]
The first term is RSS, and tries to make \(g(x)\) match the data at each \(x_i\).
The second term is a roughness penalty and controls how wiggly \(g(x)\) is. It is modulated by the tuning parameter \(\lambda \ge 0\).
The smaller \(\lambda\), the more wiggly the function, eventually interpolating \(y_i\) when \(\lambda = 0\).
As \(\lambda \to \infty\), the function \(g(x)\) becomes linear.
It can be shown that this problem has an explicit, finite-dimensional, unique minimizer which is a natural cubic spline with knots at the unique values of the \(x_i\).
The roughness penalty controls the roughness via \(\lambda\).
Smoothing splines avoid the knot-selection issue, leaving a single \(\lambda\) to be chosen.
The vector of \(n\) fitted values can be written as \(\hat{g}_\lambda = S_\lambda y\), where \(S_{\lambda}\) is an \(n \times n\) matrix (determined by the \(x_i\) and \(\lambda\)).
The effective degrees of freedom are given by \[
\text{df}_{\lambda} = \sum_{i=1}^n S_{\lambda,ii}.
\] Thus we can specify df rather than \(\lambda\).
The leave-one-out (LOO) cross-validated error is given by \[
\text{RSS}_{\text{CV}}(\lambda) = \sum_{i=1}^n \left[ \frac{y_i - \hat{g}_\lambda(x_i)}{1 - S_{\lambda,ii}} \right]^2.
\]
The solution is a natural cubic spline with knots at the unique values of the \(x_i\).
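A minimal R sketch using smooth.spline(), which accepts either a target effective df or chooses \(\lambda\) automatically (cv = TRUE requests ordinary leave-one-out CV; the default criterion is GCV):
# Smoothing spline for wage ~ age with a fixed effective df
fit_df <- smooth.spline(x = Wage$age, y = Wage$wage, df = 16)
# Let leave-one-out CV choose lambda (R warns because age has tied values)
fit_cv <- smooth.spline(x = Wage$age, y = Wage$wage, cv = TRUE)
fit_cv$df      # effective degrees of freedom selected
fit_cv$lambda  # corresponding smoothing parameter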
5 Local regression
With a sliding weight function, we fit separate linear fits over the range of \(X\) by weighted least squares.
At \(X=x_0\), \[
\text{minimize} \quad \sum_{i=1}^n K(x_i, x_0) (y_i - \beta_0 - \beta_1 x_i)^2,
\] where \(K\) is a weighting function that assigns heavier weight for \(x_i\) close to \(x_0\) and zero weight for points furthest from \(x_0\).
Locally weighted linear regression: loess function in R and lowess in Python.
Anecdotally, loess gives better appearance, but is \(O(N^2)\) in memory, so does not work for larger data sets.
There are several choices to make, such as the form of the weighting function \(K\) and the degree of the locally fitted polynomial, but the most important is the span \(s\): the proportion of points used to compute the local regression at each \(x_0\).
The span plays a role like that of the tuning parameter \(\lambda\) in smoothing splines: it controls the flexibility of the non-linear fit.
The smaller the value of \(s\), the more local and wiggly will be our fit; alternatively, a very large value of s will lead to a global fit to the data using all of the training observations.
We can use cross-validation to choose \(s\), or just specify it directly.
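A short sketch with loess(); the two span values are illustrative:
# Local regression of wage on age with two different spans
fit_loess_02 <- loess(wage ~ age, span = 0.2, data = Wage)
fit_loess_05 <- loess(wage ~ age, span = 0.5, data = Wage)
age_grid <- seq(from = min(Wage$age), to = max(Wage$age), length.out = 100)
# Smaller span -> more local, wigglier fit; larger span -> smoother fit
pred_02 <- predict(fit_loess_02, newdata = data.frame(age = age_grid))
pred_05 <- predict(fit_loess_05, newdata = data.frame(age = age_grid))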
6 Generalized additive model (GAM)
Generalized additive models (GAMs) allow for flexible nonlinearities in several variables, but retain the additive structure of linear models. \[
y_i = \beta_0 + f_1(x_{i1}) + f_2(x_{i2}) + \cdots + f_p (x_{ip}) + \epsilon_i.
\]
We can fit a GAM simply using, e.g., natural splines.
Coefficients not that interesting; fitted functions are.
Can mix terms: some linear, some nonlinear, and use ANOVA to compare models.
Can use smoothing splines or local regression as well. In R: gam(wage ~ s(year, df = 5) + lo(age, span = 0.5) + education).
GAMs are additive, although low-order interactions can be included in a natural way using, e.g. bivariate smoothers or interactions of the form (in R) ns(age, df = 5):ns(year, df = 5).
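A minimal R sketch with the gam package (assumed installed), mixing smoothing-spline terms for year and age with dummy variables for education, plotting the fitted functions, and using ANOVA to compare nested models:
library(gam)
# GAM with smoothing splines for year and age, dummy variables for education
gam_fit <- gam(wage ~ s(year, df = 4) + s(age, df = 5) + education, data = Wage)
# Plot the fitted function for each term, with pointwise standard-error bands
par(mfrow = c(1, 3))
plot(gam_fit, se = TRUE, col = "blue")
# Compare: no year term vs linear year term vs smooth year term
gam_m1 <- gam(wage ~ s(age, df = 5) + education, data = Wage)
gam_m2 <- gam(wage ~ year + s(age, df = 5) + education, data = Wage)
anova(gam_m1, gam_m2, gam_fit, test = "F")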
from statsmodels.gam.api import GLMGam, BSplines

# Create spline basis for year and age
x_spline = Wage[['year', 'age']]
bs = BSplines(x_spline, df = [4, 5], degree = [3, 3])
# Fit GAM
gam_mod = GLMGam.from_formula('wage ~ education', data = Wage, smoother = bs).fit()
gam_mod.summary()
Generalized Linear Model Regression Results
Dep. Variable:     wage              No. Observations:    3000
Model:             GLMGam            Df Residuals:        2988
Model Family:      Gaussian          Df Model:            11.00
Link Function:     Identity          Scale:               1238.8
Method:            PIRLS             Log-Likelihood:      -14934.
Date:              Tue, 25 Feb 2025  Deviance:            3.7014e+06
Time:              17:47:51          Pearson chi2:        3.70e+06
No. Iterations:    3                 Pseudo R-squ. (CS):  0.3358
Covariance Type:   nonrobust

                                    coef    std err        z    P>|z|    [0.025    0.975]
Intercept                        41.7250      5.268    7.921    0.000    31.401    52.049
education[T.2. HS Grad]          10.7413      2.431    4.418    0.000     5.977    15.506
education[T.3. Some College]     23.2067      2.563    9.056    0.000    18.184    28.229
education[T.4. College Grad]     37.8704      2.547   14.871    0.000    32.879    42.862
education[T.5. Advanced Degree]  62.4355      2.764   22.591    0.000    57.019    67.852
year_s0                           6.7031      4.972    1.348    0.178    -3.041    16.448
year_s1                           5.2035      4.237    1.228    0.219    -3.101    13.508
year_s2                           7.5149      2.343    3.208    0.001     2.924    12.106
age_s0                           26.8982      7.771    3.461    0.001    11.667    42.129
age_s1                           64.4339      5.693   11.318    0.000    53.275    75.592
age_s2                           33.2186      9.402    3.533    0.000    14.791    51.646
age_s3                           27.9161     11.042    2.528    0.011     6.275    49.557
# Plot smooth components
for i in [0, 1]:
    plt.figure()
    gam_mod.plot_partial(i, cpr = True)
    plt.show()
7 Lab
7.1 Polynomial Regression and Step Functions
To predict wage using a fourth-degree polynomial in age, use poly(age, 4) inside lm().
The poly() function creates orthogonal polynomials: the columns are uncorrelated and have mean zero, and each column is a linear combination of the variables age, age^2, age^3 and age^4.
Alternatively, we can use poly() to obtain age, age^2, age^3 and age^4 directly, by passing the raw = TRUE argument to poly().
attach(Wage)
fit <- lm(wage ~ poly(age, 4), data = Wage)
coef(summary(fit))
fit2 <- lm(wage ~ poly(age, 4, raw = T), data = Wage)
coef(summary(fit2))
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.841542e+02 6.004038e+01 -3.067172 0.0021802539
poly(age, 4, raw = T)1 2.124552e+01 5.886748e+00 3.609042 0.0003123618
poly(age, 4, raw = T)2 -5.638593e-01 2.061083e-01 -2.735743 0.0062606446
poly(age, 4, raw = T)3 6.810688e-03 3.065931e-03 2.221409 0.0263977518
poly(age, 4, raw = T)4 -3.203830e-05 1.641359e-05 -1.951938 0.0510386498
# Equivalent to
fit3 <- lm(wage ~ age + I(age^2) + I(age^3) + I(age^4), data = Wage)
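Because the orthogonal and raw polynomials span the same column space, all three parameterizations give identical fitted values; a quick check using the fit, fit2, and fit3 objects above:
# Fitted values agree across the three equivalent parameterizations
max(abs(fitted(fit) - fitted(fit2)))
max(abs(fitted(fit) - fitted(fit3)))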
In performing a polynomial regression we must decide on the degree of the polynomial to use. One way to do this is by using hypothesis tests.
fit.1 <- lm(wage ~ age, data = Wage)
fit.2 <- lm(wage ~ poly(age, 2), data = Wage)
fit.3 <- lm(wage ~ poly(age, 3), data = Wage)
fit.4 <- lm(wage ~ poly(age, 4), data = Wage)
fit.5 <- lm(wage ~ poly(age, 5), data = Wage)
anova(fit.1, fit.2, fit.3, fit.4, fit.5)
Analysis of Variance Table
Model 1: wage ~ age
Model 2: wage ~ poly(age, 2)
Model 3: wage ~ poly(age, 3)
Model 4: wage ~ poly(age, 4)
Model 5: wage ~ poly(age, 5)
Res.Df RSS Df Sum of Sq F Pr(>F)
1 2998 5022216
2 2997 4793430 1 228786 143.5931 < 2.2e-16 ***
3 2996 4777674 1 15756 9.8888 0.001679 **
4 2995 4771604 1 6070 3.8098 0.051046 .
5 2994 4770322 1 1283 0.8050 0.369682
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
As an alternative to using hypothesis tests and ANOVA, we could choose the polynomial degree using cross-validation, as discussed in Chapter 5.
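A sketch of that cross-validation approach, using boot::cv.glm() (so the models are fit with glm()); 10-fold CV and the seed are illustrative choices:
library(boot)
set.seed(1)
cv_errors <- rep(NA, 5)
for (d in 1:5) {
  glm_fit <- glm(wage ~ poly(age, d), data = Wage)
  # 10-fold cross-validated estimate of test MSE for degree d
  cv_errors[d] <- cv.glm(Wage, glm_fit, K = 10)$delta[1]
}
cv_errors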
We can also fit a polynomial logistic regression. For example, to model the binary event of earning more than $250,000 per year (wage > 250) as a degree-4 polynomial in age, we use glm() with family = binomial.
fit <- glm(I(wage > 250) ~ poly(age, 4), data = Wage, family = binomial)
Once again, we make predictions using the predict() function.
agelims <- range(age)
age.grid <- seq(from = agelims[1], to = agelims[2])
# Predictions and standard errors on the scale of the linear predictor (logit)
preds <- predict(fit, newdata = list(age = age.grid), se = T)
# Predictions directly on the probability scale
preds <- predict(fit, newdata = list(age = age.grid), type = "response", se = T)
In order to fit a step function, as discussed in Section 7.2, we use the cut() function.
fit <- lm(wage ~ cut(age, 4), data = Wage)
summary(fit)
Call:
lm(formula = wage ~ cut(age, 4), data = Wage)
Residuals:
Min 1Q Median 3Q Max
-98.126 -24.803 -6.177 16.493 200.519
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 94.158 1.476 63.790 <2e-16 ***
cut(age, 4)(33.5,49] 24.053 1.829 13.148 <2e-16 ***
cut(age, 4)(49,64.5] 23.665 2.068 11.443 <2e-16 ***
cut(age, 4)(64.5,80.1] 7.641 4.987 1.532 0.126
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 40.42 on 2996 degrees of freedom
Multiple R-squared: 0.0625, Adjusted R-squared: 0.06156
F-statistic: 66.58 on 3 and 2996 DF, p-value: < 2.2e-16
7.2 Splines
In order to fit regression splines in R, we use the splines library.
The bs() function generates the entire matrix of basis functions for splines with the specified set of knots.
library(splines)
fit <- lm(wage ~ bs(age, knots = c(25, 40, 60)), data = Wage)
The df option to bs() can be used to produce a spline with a specified number of degrees of freedom.
dim(bs(age, knots = c(25, 40, 60)))
[1] 3000 6
dim(bs(age, df = 6))
[1] 3000 6
attr(bs(age, df = 6), "knots")
[1] 33.75 42.00 51.00
In order to instead fit a natural spline, we use the ns() function. Here we fit a natural spline with four degrees of freedom.
fit <- lm(wage ~ ns(age, df = 4), data = Wage)
summary(fit)